    Mapping and Localization in Urban Environments Using Cameras

    In this work we present a system to fully automatically create a highly accurate visual feature map from image data acquired from within a moving vehicle. Moreover, a system for high-precision self-localization is presented. Furthermore, we present a method to automatically learn a visual descriptor. The map-relative self-localization is centimeter-accurate and enables autonomous driving.

    UNICARagil - Disruptive Modular Architectures for Agile, Automated Vehicle Concepts

    This paper introduces UNICARagil, a collaborative project carried out by a consortium of seven German universities and six industrial partners, with funding provided by the Federal Ministry of Education and Research of Germany. In the scope of this project, disruptive modular structures for agile, automated vehicle concepts are researched and developed. Four prototype vehicles of different characteristics, based on the same modular platform, will be built up over a period of four years. The four fully automated and driverless vehicles demonstrate disruptive architectures in hardware and software, as well as disruptive concepts in safety, security, verification and validation. This paper outlines the most important research questions underlying the project.

    City GPS using stereo vision

    Next-generation driver assistance systems require precise localization. However, global navigation satellite systems (GNSS) often lack accuracy due to shadowing effects in street-canyon-like scenarios, rendering this solution insufficient for many tasks. Alternatively, 3D laser scanners can be utilized to localize the vehicle within a previously recorded 3D map. These scanners, however, are expensive and bulky, hampering widespread use. Herein we propose to use stereo cameras to localize the ego-vehicle within a previously computed visual 3D map. The proposed localization solution is low-cost, precise, and runs in real time. The map is computed once and kept fixed thereafter, using cameras as the sole sensors without GPS readings. The presented mapping algorithm is largely inspired by current state-of-the-art simultaneous localization and mapping (SLAM) methods. Moreover, the map merely consists of a sparse set of landmark points, keeping the map storage manageably low.
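
The core step of such a map-relative localization — estimating a camera pose from matches between detected 2D features and stored 3D landmarks — can be illustrated with a direct linear transform (DLT). This is a generic textbook sketch, not the authors' implementation; function names are illustrative:

```python
import numpy as np

def dlt_pose(X, x):
    """Estimate a 3x4 projection matrix from >= 6 2D-3D correspondences.

    X: Nx3 landmark positions, x: Nx2 image points.
    Standard DLT: each correspondence contributes two linear constraints
    on the vectorized projection matrix; solve via SVD null space.
    """
    A = []
    for Xw, u in zip(X, x):
        Xh = np.append(Xw, 1.0)
        A.append(np.concatenate([np.zeros(4), -Xh, u[1] * Xh]))
        A.append(np.concatenate([Xh, np.zeros(4), -u[0] * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)      # solution up to scale

def project(P, X):
    """Project Nx3 points with a 3x4 matrix and dehomogenize."""
    Xh = np.c_[X, np.ones(len(X))]
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:]
```

In a real system this linear estimate would be wrapped in RANSAC and refined nonlinearly; the sketch only shows why a sparse landmark map with 3D positions suffices for pose recovery.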

    Visual Odometry based on Stereo Image Sequences with RANSAC-based Outlier Rejection Scheme

    A common prerequisite for many vision-based driver assistance systems is knowledge of the vehicle's own movement. In this paper we propose a novel approach for estimating the ego-motion of the vehicle from a sequence of stereo images. Our method is directly based on the trifocal geometry between image triples, so no time-expensive recovery of the 3-dimensional scene structure is needed. The only assumption we make is a known camera geometry, where the calibration may also vary over time. We employ an Iterated Sigma Point Kalman Filter in combination with a RANSAC-based outlier rejection scheme, which yields robust frame-to-frame motion estimation even in dynamic environments. A high-accuracy inertial navigation system is used to evaluate our results on challenging real-world video sequences. Experiments show that our approach is clearly superior to other filtering techniques in terms of both accuracy and run-time.
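
The RANSAC outlier-rejection pattern can be sketched generically. The toy below estimates a 2D rigid motion from noisy point matches; the paper itself works with trifocal constraints on image triples, so this only illustrates the sample-score-refit loop, with made-up names and parameters:

```python
import numpy as np

def fit_rigid_2d(src, dst):
    """Least-squares 2D rotation + translation (Kabsch/Procrustes)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cd - R @ cs

def ransac_rigid_2d(src, dst, iters=200, thresh=0.05, rng=None):
    """Sample minimal sets, keep the largest consensus set, refit on it."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(src), size=2, replace=False)   # minimal sample
        R, t = fit_rigid_2d(src[idx], dst[idx])
        resid = np.linalg.norm(src @ R.T + t - dst, axis=1)
        inliers = resid < thresh
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return fit_rigid_2d(src[best], dst[best]), best
```

The same structure carries over to motion estimation: hypothesize from a minimal feature set, score all matches, and hand only the consensus set to the filter.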

    Visual SLAM for autonomous ground vehicles

    Simultaneous Localization and Mapping (SLAM), and Visual SLAM (V-SLAM) in particular, has been an active area of research lately. In V-SLAM the main focus is most often laid on the localization part of the problem, allowing for a drift-free motion estimate. To this end, a sparse set of landmarks is tracked and their positions are estimated. However, this set of landmarks (constituting the map) is often too sparse for tasks in autonomous driving such as navigation, path planning, and obstacle avoidance. Some methods keep the raw measurements for past robot poses to address the sparsity problem, often resulting in a pose-only SLAM akin to laser scanner SLAM. For the stereo case, however, this is impractical due to the high noise of stereo-reconstructed point clouds. In this paper we propose a dense stereo V-SLAM algorithm that estimates a dense 3D map representation which is more accurate than raw stereo measurements. To do so, we run a sparse V-SLAM system and use the resulting pose estimates to compute a locally dense representation from dense stereo correspondences. This dense representation is expressed in local coordinate systems which are tracked as part of the SLAM estimate. This allows the dense part to be continuously updated. Our system is driven by visual odometry priors to achieve high robustness when tracking landmarks. Moreover, the sparse part of the SLAM system uses recently published sub-mapping techniques to achieve constant runtime complexity most of the time. The improved accuracy over raw stereo measurements is shown in a Monte Carlo simulation. Finally, we demonstrate the feasibility of our method with outdoor experiments on a car-like robot.
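
The idea of anchoring dense geometry to tracked local frames can be sketched as follows. This is a minimal illustration, not the paper's data structure; the class name and the 4x4 local-to-world pose convention are assumptions:

```python
import numpy as np

class DenseChunk:
    """Dense points stored in a local frame tracked by the sparse SLAM estimate.

    The dense points are only ever stored in local coordinates, so when the
    SLAM back end corrects the chunk's reference pose (e.g. after a loop
    closure), the world-frame reconstruction follows automatically.
    """
    def __init__(self, pose, local_points):
        self.pose = pose                  # 4x4 local->world transform
        self.local_points = local_points  # Nx3, fixed after creation

    def world_points(self):
        hom = np.c_[self.local_points, np.ones(len(self.local_points))]
        return (self.pose @ hom.T).T[:, :3]
```

Updating `pose` is all that is needed to keep the dense map consistent with the continually refined sparse estimate; no dense re-integration is required.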

    How to learn an illumination robust image feature for place recognition

    Place recognition for loop closure detection lies at the heart of every Simultaneous Localization and Mapping (SLAM) method. Recently, methods that use cameras and describe the entire image by one holistic feature vector have experienced a resurgence. Despite the success of these methods, it remains unclear how a descriptor should be constructed for this particular purpose. The problem of choosing the right descriptor becomes even more pronounced in the context of lifelong mapping. The appearance of a place may vary considerably under different illumination conditions and over the course of a day. None of the handcrafted descriptors published in the literature are particularly designed for this purpose. Herein, we propose to use a set of elementary building blocks from which millions of different descriptors can be constructed automatically. Moreover, we present an evaluation function which assesses the performance of a given image descriptor for place recognition under severe lighting changes. Finally, we present an algorithm to efficiently search the space of descriptors to find the best-suited one. Evaluating the trained descriptor on a test set shows a clear superiority over its handcrafted counterparts such as BRIEF and U-SURF. We further show how loop closures can be reliably detected using the automatically learned descriptor. Two overlapping image sequences from two different days and times are merged into one pose graph. The resulting merged pose graph is optimized and does not contain a single false link, while at the same time all true loop closures were detected correctly. The descriptor and the place recognizer source code are published together with the datasets.
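
The kind of elementary building block used by the BRIEF-style descriptors mentioned above is a pairwise intensity test. The sketch below is a generic BRIEF-like descriptor, not the authors' learned one; names and the sampling scheme are assumptions:

```python
import numpy as np

def make_pairs(n_bits, patch_size, rng=None):
    """Random test locations: each row is (y1, x1, y2, x2) inside the patch."""
    rng = rng if rng is not None else np.random.default_rng(42)
    return rng.integers(0, patch_size, size=(n_bits, 4))

def describe(patch, pairs):
    """Binary descriptor: one intensity comparison per bit (BRIEF-style)."""
    return np.array([patch[y1, x1] < patch[y2, x2]
                     for y1, x1, y2, x2 in pairs], dtype=np.uint8)

def hamming(d1, d2):
    """Matching cost between two binary descriptors."""
    return int(np.count_nonzero(d1 != d2))
```

Note that pairwise comparisons are invariant to additive brightness shifts, which hints at why searching over combinations of such building blocks is a plausible route to illumination-robust descriptors.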

    Urban localization with camera and inertial measurement unit

    Next-generation driver assistance systems require precise self-localization. Common approaches using global navigation satellite systems (GNSSs) suffer from multipath and shadowing effects, often rendering this solution insufficient. In urban environments this problem becomes even more pronounced. Herein we present a system for six-degrees-of-freedom (DOF) ego-localization using a mono camera and an inertial measurement unit (IMU). The camera image is processed to yield a rough position estimate using a previously computed landmark map. Thereafter, IMU measurements are fused with the position estimate for a refined localization update. Moreover, we present the mapping pipeline required for the creation of landmark maps. Finally, we present experiments on real-world data. The accuracy of the system is evaluated by computing two independent ego-positions of the same trajectory from two distinct cameras and investigating these estimates for consistency. A mean localization accuracy of 10 cm is achieved on a 10 km sequence in an inner-city scenario.
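
The loose fusion of map-based position fixes with inertial data can be illustrated with a one-dimensional Kalman filter: predict with the IMU acceleration, correct with the camera fix. This is a didactic stand-in for the paper's 6-DOF filter; all parameter values are made up:

```python
import numpy as np

def predict(x, P, a, dt, q=0.1):
    """Propagate state [position, velocity] with an IMU acceleration input."""
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    x = F @ x + B * a
    P = F @ P @ F.T + q * np.eye(2)   # process noise inflates uncertainty
    return x, P

def update(x, P, z, r=0.01):
    """Correct with a position measurement z from the camera/map system."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r               # innovation covariance
    K = (P @ H.T) / S                 # Kalman gain
    x = x + (K * (z - H @ x)).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

Even with position-only fixes, the cross-covariance lets the filter infer velocity, which is the essence of why the IMU-camera combination yields a smooth refined estimate between map fixes.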

    Monocular Visual Odometry using a Planar Road Model to Solve Scale Ambiguity

    Precise knowledge of a robot's ego-motion is a crucial requirement for higher-level tasks like autonomous navigation. Bundle-adjustment-based monocular visual odometry has proven to successfully estimate the motion of a robot for short sequences, but it suffers from an ambiguity in scale. Hence, approaches that only optimize locally are prone to drift in scale for sequences that span hundreds of frames. In this paper we present an approach to monocular visual odometry that compensates for drift in scale by applying constraints imposed by the known camera mounting and assumptions about the environment. To this end, we employ a continuously updated point cloud to estimate the camera poses based on 2D-3D correspondences. Within this set of camera poses, we identify keyframes which are combined into a sliding window and refined by bundle adjustment. Subsequently, we update the scale based on robustly tracked features on the road surface. Results on real datasets demonstrate a significant increase in accuracy compared to the non-scaled scheme.
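
The scale-correction idea — comparing the estimated height above a fitted road plane with the known camera mounting height — can be sketched like this. It is a simplification of the paper's robust road-feature tracking; the function name and plane-fit method are assumptions:

```python
import numpy as np

def estimate_scale(ground_points, mounted_height):
    """Recover metric scale from triangulated road points (camera frame).

    Fits a plane to the arbitrarily scaled ground points via SVD and compares
    the camera's (origin's) distance to that plane with the known mounting
    height. Multiplying all translations by the returned factor restores
    metric scale.
    """
    c = ground_points.mean(axis=0)
    _, _, Vt = np.linalg.svd(ground_points - c)
    normal = Vt[-1]                 # smallest singular vector = plane normal
    est_height = abs(normal @ c)    # distance of the origin to the plane
    return mounted_height / est_height
```

A robust variant would fit the plane with RANSAC over the tracked road features, since outliers (curbs, other vehicles) would otherwise bias the estimated height.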
